perm filename ROAD.MSG[D,LES] blob
sn#154773 filedate 1975-04-12 generic text, type C, neo UTF8
COMMENT ⊗ VALID 00006 PAGES
C REC PAGE DESCRIPTION
C00001 00001
C00002 00002 ∂12-APR-75 1448 S,LES
C00016 00003 ∂11-APR-75 1138 network site AI
C00039 00004 ∂11-APR-75 1542 network site SRI
C00044 00005 ∂11-APR-75 0718 network site CMUA
C00066 00006 ∂12-APR-75 1019 network site ISI
C00087 ENDMK
C⊗;
∂12-APR-75 1448 S,LES
A Look up the Road to AI
This note attempts to identify and describe links between current
artificial intelligence (AI) research and application areas that are
of direct interest to the Department of Defense. We can approach
such an analysis from at least two directions:
1) TOP-DOWN: describe the science-technology-applications chains.
This is the right approach for making predictions about "What is
possible?" or "How long will it take?".
2) BOTTOM-UP: start with descriptions of what is needed and work
back to the technology base and scientific knowledge that are
required, thus providing answers to questions like "What research
and development activities are important to DoD?".
Since statements about "what is needed" are usually phrased in
terms of current technology, a strictly bottom-up approach will miss
important opportunities created by technological advances. In the
current situation, we are examining the relevance of some existing
lines of research, which suggests that the top-down approach should be
dominant. Fortunately, there are several AI texts that proceed
generally in that direction [e.g. Nilsson], though they don't go far
enough down for our current purpose. We also have Ed Feigenbaum's
overview, which is a bit more specific [Feigenbaum].
Given that a substantial amount of top-down analysis already exists,
I choose a predominantly bottom-up approach in the balance of this
note. Of course I cannot pose as an expert on all of DoD's needs,
but I did spend nine of my younger years designing (or attempting
to design) command-control and military intelligence systems.
I believe that I have insights on some important technical problems
and some defects in the ways we usually go about trying to solve them.
WHAT IS THE PROBLEM?
When we think about the likely impact of future technology on
command-control and intelligence (CC&I) systems it is easy to be
convinced that computer and communications hardware will be the
dominant force. While rapid advances in those areas will certainly
offer a number of new opportunities, they will not solve the major
problems in existing systems. Software development and maintenance
tasks are dominant from both cost and system performance standpoints.
By "software" I mean both the buggy and obsolete programs and the
buggy and obsolete data files that they are supposed to work with.
Much of the effort put into CC&I systems has been based on views
similar to the following:
"In the era of ICBMs, we need more timely information for decision
making. Computers process information much faster than people.
Therefore, new command-control and intelligence systems should be
developed around computers."
Many hundreds of millions of dollars were invested in that idea,
especially by the Air Force. What they got were systems that
generally produced less accurate information more slowly and
required more people to operate them than did the earlier manual
methods.
The problem was, and is, that while computer systems are rather
quick and reliable when fed complete and accurate data, they
tend to fall apart when confronted with erroneous or incomplete
data ("garbage in ..."). Extensive programmed checking of inputs
can improve accuracy, but doesn't do much for the speed of updates.
The fundamental problem is that while there are usually only a few
ways of doing a given task correctly, there are an infinite number
of ways of losing. Given that an opportunity for failure exists,
Murphy's Law does the rest.
People are much more resilient. They often recognize bad data and
either make decisions based on available information, given suitable
bounds on the uncertainty, or construct a plan to get the additional
information that is needed. Computers cannot play a major role in
CC&I systems until they develop similar resilience.
Some of the capabilities needed are:
a) "common sense" reasoning about what is possible in the "real
world",
b) ways of representing knowledge and uncertainty that facilitate
internal consistency checking and deduction,
c) generation of plans to accomplish a given goal from sets of
elementary actions.
These are also central tasks of artificial intelligence research.
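The plan-generation capability in item (c) can be sketched in modern terms as a forward search over elementary actions. Everything concrete here (the facts, the action names, the STRIPS-style precondition/add/delete format) is invented for illustration, not taken from any system of the period:

```python
# Hypothetical sketch: breadth-first plan generation over a set of
# elementary actions, each given as (preconditions, adds, deletes).
from collections import deque

def plan(state, goal, actions):
    """Return a shortest action sequence turning state into one
    that satisfies every goal fact, or None if there is none."""
    start = frozenset(state)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        facts, steps = frontier.popleft()
        if goal <= facts:                      # all goal facts hold
            return steps
        for name, (pre, add, delete) in actions.items():
            if pre <= facts:                   # action is applicable
                nxt = frozenset((facts - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

# Invented two-action example: load cargo, then drive it to base.
actions = {
    "load":  ({"at_truck"}, {"loaded"}, {"at_truck"}),
    "drive": ({"loaded"}, {"at_base"}, set()),
}
print(plan({"at_truck"}, {"at_base"}, actions))  # ['load', 'drive']
```

Breadth-first expansion guarantees a shortest plan; a real planner would add a heuristic to keep the search space manageable.
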
In summary, bigger and faster computers offer the opportunity to
collect, process, and disseminate increased amounts of bad information.
When substantially better command-control and intelligence systems
are built, they will be based on AI techniques.
BETTER SYSTEMS NOW
While we cannot solve some of the central problems of CC&I systems
yet, there are several tools available that could be usefully
employed.
On reading the recent SRI survey of potential DoD applications for AI
[Stevens], I was struck by the similarity of the given list of
problems to lists that were compiled (by others) ten years ago. Most
of the technology needed to at least partially automate the handling
of these tasks has existed for more than ten years, yet somehow it
hasn't happened. One of the main reasons, I believe, is the shortage
of good interactive computer facilities in military installations.
The SRI survey mentions interactive scene analysis for cartography as
a ripe topic. This would employ AI image understanding techniques to
assist a person in extracting geographic data from photographs. I
agree that this looks like a good bet, but hope that the development
effort can be kept to a modest scale at least for a while. I recall
a similar task that was turned into a $40 million boondoggle a while
back.
Another possible application area is in data retrieval systems. The
user interfaces for such systems are usually so complex that they can
only be run by experts. It should be possible, using natural
language understanding methods, to develop much more comfortable
"front ends" that will answer questions about the kinds of data files
that are available and what fields they contain. The system would
also assist the user in formulating legal queries and pass them on to
the retrieval programs.
Certain automatic programming techniques also appear to be applicable
to the data retrieval problem. For example, queries that require the
linking of data from two or more files might be answered without
requiring the user to know and specify how it is to be done.
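A modern sketch of the kind of automatic file-linking described above. The tables, field names, and matching rule are all invented for illustration; the point is only that the system, not the user, decides how the files are joined:

```python
# Hypothetical sketch: answer a query spanning two files by
# discovering a shared field and joining on it automatically.
# Files are modeled as lists of records (dicts); tables are
# assumed non-empty.

def infer_join_key(table_a, table_b):
    """Pick a field name that appears in both files."""
    shared = set(table_a[0]) & set(table_b[0])
    if not shared:
        raise ValueError("no common field to join on")
    return sorted(shared)[0]

def auto_join(table_a, table_b):
    """Merge records that agree on the inferred key field."""
    key = infer_join_key(table_a, table_b)
    index = {row[key]: row for row in table_b}
    return [{**row, **index[row[key]]}
            for row in table_a if row[key] in index]

units = [{"unit": "7th", "base": "Ft. Ord"}]
readiness = [{"unit": "7th", "status": "ready"}]
print(auto_join(units, readiness))  # one record with unit, base, and status
```

A natural-language front end would sit above this: the user asks for the status and base of a unit, and the system works out that two files must be linked through the shared field.
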
MORE LATER
Over a longer period, we can expect much more from AI research and
related work in formal reasoning. Automatic checking of incoming
information for both internal and external consistency will greatly
enhance the timeliness and accuracy of assimilated data in CC&I
systems.
Program certification techniques (formal proof of correctness) will
supplant our imperfect debugging procedures.
Natural language processes will permit textual reports to be treated
as data inputs, rather than just a collection of words.
As the King of Siam said, "etc., etc., etc..."
TECHNOLOGY TRANSFER
An important question is "How can we encourage the exploitation of
opportunities created by AI research?" The traditional modes of
technology transfer have been through journal articles, conference
papers, the migration of graduate students to governmental and
industrial groups, and through consulting arrangements. A number of
university groups sponsor "forums" through which industrial groups are
briefed on recent developments.
I observe that the Japanese government is generally more aggressive
than ours in encouraging the adoption of new technology. For example,
in the last five years, more than a dozen busloads of Japanese
industrial and university groups have visited the Stanford AI Lab.,
largely under government sponsorship. In the same period, there
have been only a few taxis-full from U.S. industrial groups. It is
my impression, in fact, that the Japanese have derived more in direct
benefits from our research than have American organizations.
It appears to me that a more aggressive technology transfer program
would be beneficial here.
REFERENCES
Feigenbaum, Edward, "Artificial Intelligence Research", in file
AI.RPT [1,EAF] @SU-AI, 1973.
Nilsson, Nils J., "Problem-solving Methods in Artificial
Intelligence", McGraw-Hill, New York, 1971.
<Stevens> VOL2.I @SRI-AI (text file).
∂11-APR-75 1138 network site AI
Date: 11 APR 1975 1436-EDT
From: PHW at MIT-AI
To: LICKLIDER at USC-ISI, RUSSELL at USC-ISI, AMAREL at USC-ISI
To: EARNEST at SU-AI, FEIGENBAUM at CMU-10A, NEWELL at CMU-10A
To: NILSSON at SRI-AI, WINSTON at MIT-AI
BELOW IS A DRAFT VERSION OF THE REQUESTED STUFF. IT MAY
CHANGE OVER THE WEEKEND.
YOURS,
PATRICK
HOMEWORK FOR ARPA AI MEETING
PATRICK H. WINSTON
Any views expressed here are strictly Winston's. They have not been
debugged by either reflection or discussion with other members of the
laboratory.
TWELVE REPRESENTATIVE MIT ACHIEVEMENTS (TO BE INTEGRATED INTO A GENERAL
LIST FOR THE COMMUNITY)
TIME SHARING: Out of necessity, people in AI often do great things in
allied fields that profoundly influence how things are done out there.
Time sharing is the prime example. Another example is our influence on
DEC software in general which, in turn, surely influences the rest of
the computer industry. Stanford's drawing program, created, I believe,
in conjunction with Foonly is, I understand, in daily and important use
at DEC and, hence is or will be a general contribution to the strength
of the US computer industry. MIT's LISP machine and the general concept
of the personal computer is likely to be another revolution of the same
order of importance.
LISP/PLANNER/CONNIVER/ACTORS: List processing and fancy control
structures eventually creep into general use after gestation in the AI
community.
EXPERT PROBLEM SOLVING: Perhaps Slagle's greatest achievement was
showing early on that computers can indeed be experts in domains that
require experts. This demonstration no doubt encouraged Moses,
Feigenbaum, and others to repeat the exercise in other important
domains.
SYMBOLIC MATHEMATICS: Properly speaking, most of the development was in
MAC, but nevertheless, MATHLAB's origins are found in AI. To be sure
very little AI ended up in the system, but up one layer of abstraction,
was the idea and faith that a great deal of sophisticated
knowledge about mathematics can be sorted out and put in computationally
useful form. Let's hope those plasma physicists solve the fusion problem
using it!
PERCEPTRONS: AI deserves credit for the bad research it has turned off
as well as the good research it has turned on.
THE COPY DEMO: Copying blocks structures using an eye and hand was
important to the development of an AI-based productivity technology in the
same way that Slagle's program was important in the development of
MATHLAB -- as a feasibility demonstration and as a mechanism for
uncovering the deep problems and the scope of the problem.
IMAGE PROCESSING: AI people, again by necessity, have done good work in
this area. This work should have had more of an influence in the image
processing community than it has. This is a transfer problem. Horn's
shape from shading thesis is a good example, as is the line finding
work of Binford and Griffith (hmmmmm, maybe Griffith has put some AI
into III's commercial character recognizer?). Horn's and Marr's work on
lightness is another example of very recent vintage.
NATURAL LANGUAGE PROCESSING: Winograd's work, when read critically and
with that of Woods and others, shows that machines can deal with natural
language in small but useful domains. Recently Pratt has shown that the
scope of the machinery needed is smaller than previously expected.
PROGRAM ORGANIZATION: The languages listed above, along with the
Minsky-Papert concept of heterarchy, have been a mind-expanding collection
of ideas about the organization of programs. We have not done the
transfer work in this area. Shame.
DEBUGGING: We missed inventing structured programming because we became
interested in automatic debugging of programs a few years too late.
However, Sussman and Goldstein have gone considerably beyond Dijkstra in
working out the epistemology of procedures and are in a position to
write a great book on human debugging and rules for good programming
practice, would that there were time.
PRODUCTIVITY TECHNOLOGY: Locally the copy demo and Inoue's work on
assembly of a radial bearing with 25 micron tolerances contribute to a
strong pool of work done by Stanford, SRI, and others showing that
force feedback manipulation is a real win. The community should be,
and is, paying attention to exploiting and getting credit for this achievement.
Horn's work on lead bonding demonstrates the viability of immediate use
of machine vision on real production problems now absolutely requiring
human vision.
FRAMES: Unlike the other elements in the list, the tight and obvious
connection to things in which the real world is interested is yet to come. I
think it will come in the context of personal assistant projects and
large file system projects.
PROBLEMS TO BE AVOIDED
* Research goes in cycles. There are times when individuals should go
off to separate corners and think; there are times when groups should
work together toward specific operational objectives; there are times
when group efforts should be terminated, books written, and fresh starts
made. Failure to recognize this has wasted, and will waste, money.
Beautification of the code and commentary in the MIT copy demo system
and in SHRDLU was educational to the people who did it, but there should
have been something better to do.
* In any hard research program, there is a tendency to concentrate
resources on the easiest of the 10 problems that must be solved -- the
natural result is imbalance and uneven progress. My own personal view
is that control is such a problem. Ideas in control have been developed
to a state of considerable sophistication, while problems in
representation have been neglected by comparison, at least in the area
of machine vision.
* Over the years as AI problems have become harder, scholarship has
sometimes slipped. Some recent theses illustrate new ideas without
demonstrating them. Slagle and Moses experimented with hundreds of
integrals, Evans with scores of geometric analogy problems, Guzman with
many, many line drawings. Today I sometimes see an idea defended by
illustrating its application to some single simple problem any mechanism
would work on. It seems to me that this is a dangerous tendency. AI
people should feel obligated to take their ideas far enough to get a
feel for where they break down. In most cases this involves the hard
work of serious implementation and experiment. Experiment is an
important aspect of AI methodology and must remain so.
* We do have a tendency to duplicate, but only at a high level of
summary. In the areas in which we work the problems are hard and there is
plenty of room for 2 or 3 approaches to maneuver. An exception perhaps
lies in the field of transfer. Transfer works both ways. We not only
need to get them what we have but also need to find out what they want
and need. But transfer is super-time-consuming, and while every group
should try to know what people want and need, some particular group or
groups should become the transfer agent(s) if possible. To be more
specific, the kind of stuff SRI has learned about industry would be an
invaluable public document if it could be produced without stepping on
proprietary agreements.
THE NEXT FEW YEARS
NATURAL LANGUAGE
It seems to me that natural language research is in the group-effort and
reduce-to-practice part of the research cycle. Many good ideas have
come of individual efforts. The field seems ready to gel and spin off
what I call Natural Language Interface Engineering. Locally, Pratt (with
Lingol), Martin (with Owl), and Marcus (a graduate student) are working
hard to nail down specific well-defined points on the
complexity/performance curve. Our objective is to put together what
amounts to a handbook with which a cognitive engineer can intelligently
create a natural language interface, given a suitably constrained
domain, perhaps in the form of a particular personal assistant or large
data base module. Otherwise the deep problems in natural language
understanding are really deep problems in the structure and use of
representations.
SPEECH
Possibly speech is now in the same position MIT vision was in just after
the crest of the heterarchy craze: much learned, time to pop up a level,
summarize, write books, throw everything out, and start over. ARPA
should make a real effort to understand how the management of this area
really influenced the work. Some say progress has not been affected
much one way or another, except for some expenditure of effort on
demonstration hacks. Others argue different positions. In any event
caution is advised and the model is clearly incorrect for universal
application. Certainly usefulness is not related linearly to
capability, but, surely, once some threshold is crossed (talent,
computation requirements), magic will happen. This could be McCarthy's
long sought gift to society to whom no great gift has been given since
TV (calculators possibly excepted).
The problem does resemble vision, as I believe the speech people agree,
in that there must be no delusions about what can be accomplished with
sophisticated hard core AI. One really does have to get into those
signals and grub around with some computation.
IMAGE UNDERSTANDING
A crash program in this area may or may not work. My feeling is that a
program run like the speech project would be a probable flop, but I may
change my mind after more preliminary study has been done. Tenenbaum
and I are likely to argue about this and he may convince me, but I
currently hold to my feeling that examples using rivers and vehicle
tracks are seductive and possibly misleading. The cream of any
problem is easy to solve by brainstorming at the blackboard, but
then brick walls are soon encountered. I think image understanding will
require very serious low level vision work and prodigious amounts of
computation not currently available but likely on the ten year horizon.
With Stockham and a few other exceptions, people in the image processing
community have not understood what AI hackers need and have not produced
much that we can use. Immediate education would be a good idea.
A result is that AI hackers have gone out and done some image processing
work in desperation. Some of this has been truly outstanding and
demonstrates the principle that AI groups can and do do great things in
order to produce tools needed for AI which turn out to be great strides
in allied fields. Horn's work on shading and lightness is a prime
example. The development of time sharing yesterday and the LISP machine
today show how far ranging this can go. It would be wrong to prevent this
by enforced concentration on what is conceived to be hard core AI.
MACHINE VISION
This is a delicate area. We have worked like hell on this and have a
collection of 20 or 30 methods and ideas but still the people in the
trenches have nervous feelings that the surface has just been scratched.
One of the main things learned is how hard the problems are; elementary
vision seems harder than elementary natural language for example. Basic
work is needed before we can promise much. Continued basic research is
called for. An Apollo-type effort expecting a linear relationship between
results and the number of people working in parallel is wrong.
REPRESENTATION
It is like the weather. Everyone talks about it but (almost) nobody
does anything about it. With notable exceptions like the frames paper,
little progress has been made even though I think just about everyone
agrees that description and representation are and have been the key
problems in AI. There is a lot of hard work associated with working
things through. One needs Waltz-like courage to work out structures
containing thousands of facts, but it seems to me that there is a lot of
work of this sort to be done. I think the best area in which to do it is
discussed next.
THE EPISTEMOLOGY OF THE REAL PHYSICAL WORLD
Things support, push, fill, and float. I think we understand abstract
worlds largely in terms of analogy to such elementary physical
phenomena. Understanding electricity in terms of water pipes
illustrates what I mean. This is not a problem for problem-solving
research, but for representation and frame-matching. It is for whatever
people locally call their very basic research subprogram.
CONTROL AND PROBLEM SOLVING
Perhaps these really should not be lumped together. My view is that
control has gone far enough for the moment and should not be a major
thrust. This is true even in vision, where Freuder and others have made
it true with recent progress. Expert problem solving still has some
distance to go with some excellent things to be done. I particularly
have in mind Feigenbaum's stuff on MS and on crystallography and
Sussman's stuff on knowledge based understanding of electronic circuits
which has debugging and design both in mind. I do not think that all of
the AI laboratories should devote themselves to this area however.
LARGE DATA BASES AND INTELLIGENT TERMINALS
Here, I think, is an intelligent pair of applications-oriented research
areas with which one can man the barricades. Here we can promise to put
up or shut up with less risk by far than in image understanding. This,
I think is the right place for natural language interface engineering to
demonstrate its sophistication. Scoring here is our best chance for
hard core AI to make a short range impact on the way real computer
people do things. But we absolutely must deliver if we take it on.
∂11-APR-75 1542 network site SRI
Date: 11 APR 1975 1138-PDT
From: NILSSON at SRI-AI
Subject: ROADMAP
To: LICKLIDER at BBN-TENEX, LICKLIDER at BBN-TENEXA,
To: LICKLIDER at BBN-TENEXB, LICKLIDER at BBN-TENEXD,
To: LICKLIDER at USC-ISI, LICKLIDER at OFFICE-1,
To: FEIGENBAUM at SUMEX-AIM, WINSTON at MIT-AI,
To: NEWELL at CMU-10A, LES at SU-AI, AMAREL at USC-ISI
cc: NILSSON
WE DO NOT, AT THE MOMENT, HAVE A "ROADMAP" FOR AI RESEARCH AND
APPLICATIONS. WE HAVE, HOWEVER, GIVEN CONSIDERABLE THOUGHT TO PLANNING
THE SRI COMPUTER-BASED CONSULTANT (CBC) PROJECT. THE CBC PROJECT SPANS A
GOOD DEAL OF SEVERAL COMPONENTS OF AI, AND WE THINK OUR PLAN IS A REASONABLE
EXAMPLE OF WHAT AN AI RESEARCH PLAN SHOULD LOOK LIKE. A NEARLY FINAL
DRAFT OF THIS PLAN IS AVAILABLE AT SRI-AI ON FILE <STEVENS>
PRO.APP. UNFORTUNATELY THE CHARTS TO WHICH THE PLAN REFERS CANNOT BE SENT
OVER THE NET, BUT I'LL HAVE COPIES TO PASS OUT ON MONDAY.
REGARDING AI APPLICATIONS, WE HAVE ALREADY PREPARED A REPORT
LISTING AND DISCUSSING SEVERAL APPLICATIONS OF POSSIBLE INTEREST TO
DOD. THIS REPORT IS AVAILABLE AT SRI-AI ON FILE <STEVENS>VOL2.I.
REGARDING LISTING THE SCIENTIFIC OBJECTIVES FOR AI AND A PLAN
FOR ACHIEVING THEM, I HAVEN'T BEEN ABLE TO BRING MYSELF TO PUTTING DOWN ON
PAPER, ONCE AGAIN, THE USUAL TRITE-ISMS. I'M AT THE POINT WHERE I'M ALMOST
WILLING TO AGREE TO ANYONE'S SCIENTIFIC PLAN.
REGARDING ACCOMPLISHMENTS:
(1) THERE ARE PROBABLY MORE ACTUAL APPLICATIONS OF THINGS DESCENDED
FROM THE AI LABS (INCLUDING PATTERN RECOGNITION) BEING USED IN VARIOUS
PLACES IN DOD THAN ANY OF US WOULD HAVE IMAGINED. IF IT'S REALLY WORTH
NAMING ALL OF THESE, PERHAPS RAND OR SOMEBODY OUGHT TO DO A STUDY TO
FERRET THEM OUT.
(2) ACCOMPLISHMENTS SHOULD BE MEASURED AGAINST THE ORIGINAL
(PERHAPS IMPLICIT) GOALS FOR AI RESEARCH SET UP WHEN ARPA BEGAN
FUNDING IT. I DON'T BELIEVE ARPA BEGAN FUNDING WITH A VIEW TOWARD
SUPER-IMMEDIATE APPLICATIONS, BUT INSTEAD WANTED TO SET UP "CENTERS-OF-
EXCELLENCE" WHERE TECHNOLOGICAL PROGRESS COULD BE MADE ALONG THE MOST
EXOTIC FRONTIERS OF INFORMATION PROCESSING. THIS GOAL HAS BEEN ACHIEVED TO
THE POINT WHERE IT DOES IN FACT NOW MAKE SENSE TO THINK OF APPLYING
THIS TECHNOLOGY TO MILITARY PROBLEMS. THE QUESTION IS, DOES IT REALLY
MAKE SENSE TO TURN THESE CENTERS-OF-EXCELLENCE INTO ROUTINE APPLICATIONS
HOUSES?
IN LOOKING OVER HEILMEIER'S RECENT MESSAGE TO CONGRESS, I AM STRUCK BY THE
NUMBER OF NON-IPTO ITEMS THAT WILL REQUIRE ADVANCED TECHNIQUES IN
INFORMATION PROCESSING IN ORDER TO WORK WELL. FOR EXAMPLE: SPACE
SURVEILLANCE, AIR VEHICLES, WARNING TECHNOLOGY, SPACE OBJECT IDENTIFICATION,
TARGET ACQUISITION AND IDENTIFICATION, OCEAN MONITORING AND CONTROL,
FORECASTING AND DECISION TECHNOLOGY, EXOTIC SENSORS. WITH THIS MUCH
DEPENDENCE ON INFORMATION PROCESSING, IT WOULD SEEM ONLY PRUDENT TO MAKE
SURE THAT SOME CENTERS OF EXCELLENCE KEEP WORKING AT FULL TILT ON
PUSHING THE BASIC TECHNOLOGY.
SEE YOU ALL MONDAY,
NILS
-------
∂11-APR-75 0718 network site CMUA
**** FTP mail from [A350HS02] (SIMON)
AI ROAD MAP EXERCISE FOR IPTO
FILE: AIPROS.A11

DRAFT

This is a rough draft of the views of Newell and Simon on where
AI stands and where it is and ought to be going. It discusses
briefly:
1) The accomplishments of AI
2) The scientific goals of AI
3) The potential applications of AI

THE ACCOMPLISHMENTS OF AI

The typical form of research in AI is to build intelligent
programs, capable of interesting task performances of one
kind or another. The programs themselves form, of course, one
of the products of the research; but the important products are the
mechanisms, components of intelligence, that have been identified, and
the understanding that has been reached of the characteristics
these mechanisms must possess in order to support intelligent behavior.
Still another product, which will not be emphasized here, is the
light that has been thrown by AI research upon the mechanisms and
processes of human intelligence.

A functional classification of the mechanisms of intelligence might
place them under the following headings:

1) Representation and memory organization
2) Problem solving
3) Perception
4) Language processing
5) Control and processing organization
6) Motor behavior

The category of "language processing" is not quite parallel to
the others, but the topic is of sufficient importance to
justify separate treatment.

Representation

The invention of list processing was one of the earliest
achievements of AI research, but much subsequent research has been
devoted to perfecting that invention and exploring its applications
to the design of intelligent systems. Thus, the organization of
semantic memories, all having list structures as their underlying
mode of representation, has been one of the important areas of
research progress over the past five years. We have learned how
to store a vast variety of information in the form of list structures,
including information derived from natural language inputs and
including also discrimination nets (indexes).

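The "discrimination nets (indexes)" mentioned above can be sketched as a tree of attribute tests that routes each item to a bucket, EPAM-style. The particular tests and words below are invented for illustration:

```python
# Hypothetical sketch of a discrimination net: internal nodes apply
# a test to an item; leaves hold the items that survived the same
# sequence of test outcomes.

class Node:
    def __init__(self, test=None):
        self.test = test        # item -> branch label; None at a leaf
        self.branches = {}      # label -> child Node
        self.items = []         # items stored at a leaf

def net_insert(net, item):
    node = net
    while node.test is not None:
        node = node.branches.setdefault(node.test(item), Node())
    node.items.append(item)

def net_retrieve(net, probe):
    """Items the net cannot discriminate from the probe."""
    node = net
    while node.test is not None:
        label = node.test(probe)
        if label not in node.branches:
            return []
        node = node.branches[label]
    return node.items

# Index words by first letter, then (under "c") by length.
root = Node(test=lambda w: w[0])
root.branches["c"] = Node(test=len)
for word in ("cat", "cart", "dog"):
    net_insert(root, word)
print(net_retrieve(root, "cab"))  # ['cat']
```

Growing the net by adding a new test wherever two distinct items land in the same leaf is what makes it serve as an index over a large store.
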
Problem Solving

After the initial demonstration that a machine could be programmed
to solve problems by heuristic search, some of the important subsequent
developments were the programming of means-ends analysis as a central
problem-solving tool, and a gradually growing understanding of how
to control the direction of search (depth-first, breadth-first,
and best-first search). Two broad alternative ways of representing
problem situations have emerged: propositional representation with
inferential search using modal logics, and modeling with search by
model manipulation. In the special realm of theorem proving,
much has been learned about the resolution method: its power and
limitations, and the usefulness of such heuristics as unit
preference and set of support.

Apart from the specific problem-solving systems that have been
built and tested, there now exists a large body of know-how, and a
much smaller body of exact mathematical theory of problem solving.
Under the latter heading would be included theorems about resolution
theorem proving, the alpha-beta procedure, shortest-path
valuation functions, and least-search valuation functions.

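The search-control alternatives named above differ only in how the frontier of unexpanded nodes is ordered: a stack gives depth-first, a queue breadth-first, and a priority queue ordered by an evaluation function gives best-first. A minimal sketch of the best-first case, with an invented grid example:

```python
# Hypothetical sketch of best-first search: always expand the
# frontier node the evaluation function h scores lowest.
import heapq

def best_first(start, goal, neighbors, h):
    frontier = [(h(start), start)]
    parent = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:                    # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbors(node):
            if nxt not in parent:
                parent[nxt] = node
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

# Invented example: walk a 3x3 grid from (0,0) to (2,2), guided by
# Manhattan distance to the goal.
def moves(p):
    x, y = p
    return [(nx, ny) for nx, ny in ((x + 1, y), (x, y + 1))
            if nx <= 2 and ny <= 2]

print(best_first((0, 0), (2, 2), moves, lambda p: (2 - p[0]) + (2 - p[1])))
```

Replacing the heap with a plain stack or queue (and ignoring h) turns the same skeleton into depth-first or breadth-first search.
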
Perception

An early period of exploration that emphasized very general
perceptron-like systems has given way to a number of very specific
systems for performing particular tasks of visual and auditory
perception. Handling noisy "natural" inputs (i.e., pictorial
scenes and speech) still poses formidable problems, but major
progress has been made in scene analysis and in speech understanding
utilizing semantic as well as phonetic clues.

There has been an important convergence, especially in the
past five years, between work on perception and work on representation.
This has been sparked by the realization that new information can
only be assimilated successfully with the help of relevant information
that is already stored in semantic memory. Hence, most recent
work in perception (the HEURISTIC COMPILER, MERLIN, "frames")
is aimed at bringing considerable contextual information to bear upon
perceptual processes.

Control and Processing Organization

The first stages of AI research emphasized the exploitation of
flexible list-processing languages with good general facilities
for closed subroutines, recursions, and generators. One
important byproduct of these language features has been the
formulation of the ideas of "structured programming," many of whose
concepts and practices are either implicit or explicit in
the programming practices and problem-solving systems of AI.

For the past several years, there has been considerable
experimentation with new forms of program organization. Two
ideas that have attracted particular attention are procedural
embedding (thus blurring the program-data distinction) and the
organization of AI programs as production systems.

Motor Behavior

The robot projects have thrown considerable light on the
requisites for successful motor behavior in natural environments.
In particular, successful perceptual-motor coordination lies at the
heart of building intelligent systems that can behave appropriately
in unprepared environments.

Language Processing

During the initial years of AI research, progress in natural
language processing was hampered by an excessive preoccupation with
syntax. During the past ten years, the situation has changed
dramatically, and a great deal of understanding has been achieved of
methods for using semantic information to achieve language
understanding and to guide language processing.

THE SCIENTIFIC GOALS OF AI

The aims of AI research are defined by the range of tasks that we
would like to be able to perform, and whose performance calls
for intelligence. The research agenda is defined by the distances
that the systems we have built thus far fall short of
the capabilities we would like them to have. We have perhaps come
furthest in devising systems capable of solving relatively well-structured
problems. Perceptual-motor coordination is perhaps the domain in
which we have made least progress. However that may be, there are
important and promising research targets along each of the main
directions of research discussed in the previous section.

Problem solving. There are two important lines to be followed
here (both of which are receiving increasing attention). One is to
design systems that are capable of understanding problem instructions
and of programming themselves to tackle a problem described by such
instructions. The other is to design systems that are capable of
operating in poorly structured problem domains: where the characteristics
of problem solutions are vaguely defined, and where the problem-poser
depends upon the problem solver to evoke from his semantic memory both
relevant design constraints and relevant design information, ideas, and
procedures without detailed instruction.

Representation. Clearly the research problems just mentioned
are also problems in the design of representations. In addition, there
is still considerable question as to what kinds of representations are
most appropriate for the storage of information derived from visual
displays. A major concern in the design of representations is to
provide means of access to the information that is there. This
concern suggests at least two research foci: matching procedures
for finding structures in memory that are similar to perceived structures,
and, in general, the indexing of large semantic stores, whether by
matching processes or otherwise.
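The two research foci just named can be made concrete with a minimal sketch: an inverted index over feature sets, with matching by feature overlap. The stored structures and their features below are illustrative assumptions, not any system described in this note.

```python
from collections import defaultdict

def build_index(store):
    """Inverted index: feature -> names of stored structures bearing it.
    `store` maps a structure name to its feature set, standing in for a
    large semantic store."""
    index = defaultdict(set)
    for name, features in store.items():
        for f in features:
            index[f].add(name)
    return index

def best_matches(index, perceived):
    """Rank stored structures by shared-feature count with a perceived
    structure -- a crude matching procedure of the kind called for above."""
    counts = defaultdict(int)
    for f in perceived:
        for name in index.get(f, ()):
            counts[name] += 1
    return sorted(counts, key=lambda n: -counts[n])

# Illustrative store: two structures described by symbolic features.
store = {"arch": {"upright", "second-upright", "lintel"},
         "tower": {"upright", "second-upright", "third-upright"}}
ranked = best_matches(build_index(store), {"upright", "lintel"})
```

The index makes retrieval cost proportional to the perceived structure's feature count rather than to the size of the store, which is the point of indexing a large semantic memory.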

Perception. The speech-understanding projects appear to provide
a useful continuing model for defining research objectives in both
auditory and visual perception. Robot projects, while not currently
fashionable, have the useful feature of setting demanding tasks for
perceptual (and especially visual) components of intelligent systems.

Control and Processing Organization. Our knowledge is still
rudimentary about the consequences, and the relative advantages and
disadvantages, of merging data and process representations, as against
keeping them relatively distinct. Production systems show considerable
promise, particularly for application to learning systems, but we
still do not know much about how to order a set of productions, or how to
combine production systems with other, more conventional, types of
program control.
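The recognize-act organization named above can be sketched minimally. The rules and the first-match (rule-order) conflict-resolution policy here are illustrative assumptions, not any particular production system.

```python
def run_productions(facts, rules, max_cycles=100):
    """Recognize-act cycle over a working memory of facts. Conflict
    resolution is simply rule order: the first rule whose condition is
    satisfied, and whose action adds something new, fires."""
    facts = set(facts)
    for _ in range(max_cycles):
        for needs, adds in rules:                 # an ordered production set
            if needs <= facts and not adds <= facts:
                facts |= adds                     # fire: act on working memory
                break                             # restart the match phase
        else:
            break                                 # quiescence: no rule fired
    return facts

# Illustrative productions: A & B -> C, then C -> D.
rules = [({"A", "B"}, {"C"}),
         ({"C"}, {"D"})]
result = run_productions({"A", "B"}, rules)
```

Because the rules are ordered, reordering them changes which production wins a conflict, which is precisely the open ordering question raised in the paragraph above.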

It should be evident from these brief notes that we find it
easier to define some promising directions of research than to define
specific goals for that research. Traditionally, in AI research,
goals have been defined by specifying the behavior we expect a system
to attain (geometry at the high-school level, expert chess, ability
to handle language of such-and-such complexity, etc.). This mode of
specification has perhaps been formalized most fully in defining
the objectives for the speech-understanding projects.

Specifying goals in terms of the desired capability of a system
has a great deal to commend it. It makes it relatively easy to
determine whether or not the goals have been attained, and it
encourages movement in the direction of application (i.e., by
specifying goals in terms of tasks that have real-world importance).
Its main disadvantage is that it does not explicitly acknowledge
the knowledge about intelligent systems that is gained even in
relatively unsuccessful attempts to build such systems.


APPLICATIONS OF AI

In our account of progress in AI, we limited ourselves to the
basic science, and did not mention progress in application. It
is nevertheless easy to list a number of significant applications,
for example:

1) List processing is now an important computer science software
tool, and has had some effect upon hardware design as well.
2) Heuristic problem-solving techniques have had a number
of important applications in engineering design practice
(e.g., automatic design of electrical devices), and in
industrial engineering (e.g., combinatorial scheduling problems).
3) Heuristic problem-solving systems have been built for analyzing
mass spectrogram data, for synthesizing molecules,
and for automating some aspects of chemical engineering design.
4) Programming languages and practices in AI have been a principal
source for the ideas that went into structured programming.
5) Research in automatic programming has produced a system
that is at least at the threshold of feasibility for data-base
design.

It will be noticed from these examples (and as a comment on the
earlier discussion of research goals) that conceptual advances
(e.g., items 1 and 4) have been at least as important for applications
as have been specific intelligent systems. In spite of this past
experience, the recent progress that has been made
(especially with respect to representation and language processing)
holds out increasing promise that we may be able to develop, in the
next period of work, a larger number of intelligent systems that
perform real-world tasks at levels of competence and cost
that will make genuine applications feasible. Most of the applications
that come readily to mind will call for systems with far more
semantic information available to them than most of the AI systems
built thus far.

We will halt here, with these rough records of our thinking-aloud
processes, in order to get this draft to you by the Friday noon
deadline. If it is at all possible, we will transmit an elaborated
draft before the Monday meeting.

A. Newell and H. A. Simon
∂12-APR-75 1019 network site ISI
Date: 12 APR 1975 1019-PDT
From: AMAREL at USC-ISI
Subject: CONTRIBUTION TO THE 'ROADMAP IN THE AI AREA'
To: LICKLIDER, EARNEST at SU-AI, FEIGENBAUM, NEWELL at CMU-10A,
To: NILSSON at SRI-AI, WINSTON at MIT-AI
cc: AMAREL
IN THE FOLLOWING I AM GIVING AN OUTLINE OF CURRENT SCIENTIFIC/
TECHNICAL PROBLEMS IN AI (AS I SEE THEM), AND A LIST OF AI APPLICATIONS
OF POSSIBLE SIGNIFICANCE TO DOD - THAT I BELIEVE CAN BE APPROACHED NOW.
I AM ALSO PROPOSING AN APPROACH TO APPLICATIONS-ORIENTED WORK IN THE
AI AREA, AND I AM EXPRESSING CERTAIN CONCERNS ABOUT ISSUES THAT MUST BE
ADDRESSED IN DRAWING A 'ROADMAP'.
THE MATERIAL BELOW IS IN NO WAY COMPLETE. I HOPE IT WILL BECOME
CLEARER IN OUR DISCUSSIONS OF APRIL 14 IN WASHINGTON.
----------
A. SCIENTIFIC AND TECHNICAL PROBLEMS
1. Problems of Representation.
How to represent problems of different types; how to shift
representations; how to acquire and manage knowledge within
a given representational framework; how to coordinate and
effectively use different bodies of knowledge in a domain
(e.g., systematic-scientific knowledge about a system and
also informal, experiential, knowledge about its operation;
two models of a system at different levels of resolution);
how to change stored knowledge on the basis of new data,
operational experience, or beliefs.
2. Problem-solving strategies.
(a) Derivation Problems: How to effectively generate a path
between two specified states (this is the old problem of
heuristic search, but it deserves more work); how to
form plans from operational experience and how best to
use plans; what lies beyond resolution in mechanical
reasoning (natural inference?).
(b) Interpretation/diagnosis problems: Given a set of data
(signals from sensors, test results, intelligence
information, etc.) find the most plausible hypothesis
about causative agents, underlying processes, chains of
events, etc., in terms of which the data can be
explained.
(c) Formation problems: synthesize a system (e.g. a
program) from given specifications, infer a theory from
a body of experience.
Problems of type (b) and (c) are closely related. They are
central to many 'real life' problem-solving situations.
However, we know much less about them than we know about
problems of type (a). In many large system applications
(e.g. the 'Underwater Listening' problem) problems of the
three types coexist. An important question is how to design
a good integrated system that handles this variety of
problem types well.
Problems of representation (1 above) and questions of
strategy are tightly interdependent. An important question
in complex AI applications is: given a variety of knowledge
in a domain and a specific task at hand - how to focus on
relevant aspects of the knowledge base to handle the task in
an effective way.
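Item (a), the derivation problem, can be sketched as a small best-first (A*) search between two specified states. The number-line state space and the distance heuristic below are illustrative assumptions chosen only to make the sketch self-contained.

```python
import heapq
from itertools import count

def heuristic_search(start, goal, neighbors, h):
    """Best-first (A*) search: derive a path between two specified states.
    neighbors(s) yields (successor, step_cost); h estimates remaining cost."""
    tie = count()                                 # tie-breaker for the heap
    frontier = [(h(start), 0, next(tie), start, [start])]
    seen = set()
    while frontier:
        _, g, _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path                           # goal reached
        if state in seen:
            continue
        seen.add(state)
        for nxt, cost in neighbors(state):
            if nxt not in seen:
                heapq.heappush(
                    frontier,
                    (g + cost + h(nxt), g + cost, next(tie), nxt, path + [nxt]))
    return None                                   # no derivation exists

# Illustrative toy state space: integers with +/-1 moves, goal distance as h.
path = heuristic_search(0, 5,
                        lambda s: [(s - 1, 1), (s + 1, 1)],
                        lambda s: abs(5 - s))
```

With an admissible heuristic the first goal state popped lies on a cheapest path; with h identically zero the same code degrades to uniform-cost search, which is one reason the old heuristic-search problem "deserves more work".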
3. Systems, Languages and Implementation Methodologies.
How to facilitate communication between a domain expert and
a knowledge base; how to provide the expert - and his
computer science collaborator - with a convenient
environment for specifying, changing and testing systems.
How to implement in efficient ways more powerful control
structures than are presently available.
B. APPLICATIONS
1. Interpretation of Underwater signals (the TTO problem).
2. Maintenance problems; diagnosis/prognosis of malfunctions in
specific systems (including computer systems).
3. Interpretation Aids for Intelligence Analysts (e.g.
inference of patterns of scientific/technical developments
from published material in combination with other 'side
information').
4. Selective summarization of information and recommendation of
courses of action to decision makers in situations where
response time is critical.
5. Logistics and Scheduling problems. Development of heuristic
procedures for significant OR problems (e.g. network
design, resource allocation, warehouse placement).
6. Software design from non-procedural specifications. Program
synthesis and debugging.
7. Development of a modeling facility for
scientific/engineering problems which would include a
library of numerical and symbolic manipulation packages as
well as an intelligent 'front end' which would assist a user
in the development and testing of his mathematical models.
Work with partial differential equations on turbulence or
heat transfer models would be a good initial focus.
Each of these applications involves various mixtures of the
scientific/technical problems discussed in (A) above. In each
case, the most crucial effort is the choice of a knowledge base
and of a way of representing it on the computer.
Work on applications requires close collaboration between
computer scientists and experts in the problem area. The
approach to design and implementation should be responsive to
the fact that the knowledge base in a domain is not stationary -
usually, it is in a state of flux. Our experience at Rutgers in
AI applications to medicine and psychological modeling (in an
NIH-sponsored project) shows how important it is to proceed in
system development both from 'bottom-up' and from 'top-down'. A
reasonable pattern is as follows:
(a) Specific problems in an application area are approached
directly and in depth; existing ideas and AI methods are
adapted to the given situation; where choices have to be
made between the search for general methods on the one hand
and the obtaining of specific results and the building of
prototype systems on the other, the latter approach is
taken. In a second phase, generalizations/improvements of
the initial approach take place. To a great extent these
are influenced by parallel work on
(b) general systems for flexibly acquiring, managing and using
knowledge in the domain. This parallel work is essential
for creating sufficiently flexible and useful systems.
Each of the applications that I mentioned will provide a good
environment for work on (a large number of) the scientific and
technical problems of AI. I believe that the dominant factor in
the choice of an application is the expectation of a good,
working, collaborative arrangement between the computer
scientists in a project and experts in the application area.
The success of an application prospect depends heavily on the
dedicated participation of at least one individual expert in the
project - not only for an initial period of general orientation
and advice, but on a continuing basis.
C. APPROACH
In any application area it is essential to combine system design
and experimentation activities with relevant core work in AI. I
think that work on applications can build on substantial
progress already made in AI; conversely, I am convinced that the
challenge of 'real life' applications will invigorate AI and
guide it to interesting problems that could not be readily
appreciated in a completely 'sheltered' environment. On the
other hand, it is important to permit basic work (controlled
experiments, special studies, development of general methods and
tools) to grow together with the applications-oriented
activities.
Therefore, each AI group should have a combination of
applications projects and core AI projects. In addition, good
communications and collaborative ties should be established
among the groups and also between each group and various
application-oriented activities. It would help to seek closer
ties with TTO, STO and with other agencies (especially,
intelligence agencies). More work is needed now on the
identification of promising AI applications.
Our experience with AI applications at Rutgers shows that
effective collaborative developments require a fairly
symmetrical commitment between the computer scientist on the one
hand and the 'man with the problem' on the other. A
service-support relationship will not do (in either direction).
It should be the responsibility of the AI groups to seek/create
the appropriate collaborative arrangements.
The ARPANET provides a good medium for real collaboration (in
program development, testing and improvement) and communication.
A program developed on an AI group's machine can be accessed and
tested via the net by the collaborating applications groups. A
tool (language utility program, etc.) developed by one AI group
can be used by another group over the net.
A series of Annual AI Applications Workshops should be
instituted with the dual purpose of technical communications
(including detailed system demonstrations) between AI groups,
and also communications with the 'potential user community'.
D. CONCERNS
AI is at the cutting edge of computer science, and AI groups in
the country have been important centers of education for young
scientists who are advancing the computer field in many ways. I
hope that a redirection of AI activities will preserve this
important function as much as possible.
If the basic aspects of AI are taken out of ARPA supported AI
projects, then it would be extremely difficult to continue
serious AI work in Universities. On the other hand, it is
possible to maintain a high level of University activity and
interest if appropriate mixtures of applications work and basic
work are supported. The detailed control of these mixtures
should be in the hands of the PI's and the senior investigators
- under general guidance from IPTO.
The problem of classified information may create difficulties in
working on AI applications in Universities. This problem may
induce the creation of a separate Institute for AI applications
- of the type advocated by Feigenbaum. The idea of such an
Institute deserves serious consideration. It could consist of a
small permanent group which would be augmented by faculty or
students coming from University AI projects and visiting for
limited periods of time (e.g., a summer, or a semester). The
question of distributing responsibilities between the Institute
and the University AI groups is not simple. It would be
inappropriate to leave all applications work in the Institute
and to restrict the Universities to 'purists only'. The problem
is how to distribute applications activities between an
Institute (where classified work can take place) and a
University group. There has been some experience with this type
of problem in the past - and it is possible that a reasonable
solution can be found in the present case.
----------
THIS IS ALL FOR NOW. SORRY FOR BEING LATE IN SENDING THIS IN.
REGARDS
SAUL AMAREL
-------